Penalized Likelihood Estimation: Convergence under Incorrect Model

Author

  • Chong Gu
Abstract

The penalized likelihood method is among the most effective tools for nonparametric multivariate function estimation. Recently, a generic computation-oriented asymptotic theory has been developed in the density estimation setting and has been extended to other settings such as conditional density estimation, regression, and hazard rate estimation, under the assumption that the true function resides in the reproducing kernel Hilbert space in which the estimate is sought. In this article, we illustrate that the theory may remain valid, after appropriate modifications, even when the true function resides outside of the function space under consideration. Through a certain moment identity, it is shown that the Kullback-Leibler projection of the true function onto the function space under consideration, if it exists, acts as a proxy for the true function as the destination of asymptotic convergence.
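The Kullback-Leibler projection mentioned in the abstract can be illustrated with a toy numerical example (this sketch is not from the paper; the mixture, the Gaussian model family, and the grid search are all illustrative assumptions). The true density is a two-component Gaussian mixture, while the working model family contains only single Gaussians, so the truth lies outside the model space. The KL projection argmin_q KL(p || q) over Gaussians is known to be the Gaussian matching the mean and variance of p, which the grid search below confirms.

```python
import numpy as np

x = np.linspace(-8.0, 10.0, 4001)
dx = x[1] - x[0]

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# True density p lies outside the single-Gaussian model family:
# p = 0.5*N(-1, 0.5^2) + 0.5*N(2, 1), with mean 0.5 and variance 2.875.
p = 0.5 * normal_pdf(x, -1.0, 0.5) + 0.5 * normal_pdf(x, 2.0, 1.0)

def cross_entropy(mu, sigma):
    # KL(p || q) = H(p, q) - H(p); H(p) does not depend on q,
    # so minimizing the cross-entropy H(p, q) finds the KL projection.
    return -np.sum(p * np.log(normal_pdf(x, mu, sigma))) * dx

mus = np.linspace(0.0, 1.0, 101)
sigmas = np.linspace(1.2, 2.2, 101)
H = np.array([[cross_entropy(m, s) for s in sigmas] for m in mus])
i, j = np.unravel_index(H.argmin(), H.shape)
mu_hat, sigma_hat = mus[i], sigmas[j]

# Moment matching predicts mu = 0.5 and sigma = sqrt(2.875) ~ 1.696.
print(mu_hat, sigma_hat)
```

The projection is well defined here even though no member of the model family equals the true density, which is the sense in which it can serve as the destination of convergence.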


Similar articles

Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...
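The coordinate descent idea referenced in this snippet can be sketched for the lasso (a hedged illustration, not the cited authors' implementation): each coordinate of the penalized least-squares objective 0.5*||y - Xb||^2 + lam*||b||_1 has a closed-form soft-thresholding update, and cycling through coordinates converges to the lasso solution.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*|.|: shrinks z toward zero by t.
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual with coordinate j removed.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

# Synthetic check: two active features, three pure-noise features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0]) + 0.1 * rng.normal(size=100)
beta_hat = lasso_cd(X, y, lam=5.0)
print(np.round(beta_hat, 3))
```

The per-coordinate update is cheap and exact, which is the source of the computational advantage the snippet alludes to.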


Estimation in Generalized Linear Models for Functional Data via Penalized Likelihood

We analyze in a regression setting the link between a scalar response and a functional predictor by means of a Functional Generalized Linear Model. We first give a theoretical framework and then discuss identifiability of the model. The functional coefficient of the model is estimated via penalized likelihood with spline approximation. The L2 rate of convergence of this estimator is given under...


Generalized Linear Model Regression under Distance-to-set Penalties

Estimation in generalized linear models (GLM) is complicated by the presence of constraints. One can handle constraints by maximizing a penalized log-likelihood. Penalties such as the lasso are effective in high dimensions, but often lead to unwanted shrinkage. This paper explores instead penalizing the squared distance to constraint sets. Distance penalties are more flexible than algebraic and...
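The distance-to-set idea in this snippet can be sketched for a simple constrained least-squares problem (all details here are illustrative assumptions, not the paper's algorithm): instead of an algebraic penalty like the lasso, penalize the squared Euclidean distance from the coefficient vector to a constraint set, here C = {b : b >= 0}. That distance squared is sum_j min(b_j, 0)^2, which is differentiable, so plain gradient descent applies, and increasing the penalty weight pushes the solution toward C without shrinking coefficients already inside it.

```python
import numpy as np

def fit_distance_penalized(X, y, rho, lr=1e-3, n_steps=5000):
    """Gradient descent on 0.5*||y - Xb||^2 + rho * dist(b, {b >= 0})^2."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_steps):
        grad_loss = X.T @ (X @ beta - y)        # least-squares gradient
        grad_pen = 2.0 * np.minimum(beta, 0.0)  # gradient of dist^2 to {b >= 0}
        beta -= lr * (grad_loss + rho * grad_pen)
    return beta

# Third true coefficient violates the nonnegativity constraint.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 3))
y = X @ np.array([1.0, 0.5, -1.0]) + 0.1 * rng.normal(size=80)
beta_hat = fit_distance_penalized(X, y, rho=100.0)
print(np.round(beta_hat, 3))
```

With a finite penalty weight the negative coefficient is pulled toward zero but the feasible coefficients are left essentially unshrunk, in contrast to the uniform shrinkage of lasso-type penalties the snippet mentions.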


Adaptive Penalized M-estimation with Current Status Data

Current status data arises when a continuous response is reduced to an indicator of whether the response is greater or less than a random threshold value. In this article we consider adaptive penalized M-estimators (including the penalized least squares estimators and the penalized maximum likelihood estimators) for nonparametric and semiparametric models with current status data, under the ass...


Some upper bounds for the rate of convergence of penalized likelihood context tree estimators

Abstract. We find upper bounds for the probability of underestimation and overestimation errors in penalized likelihood context tree estimation. The bounds are explicit and apply to processes of not necessarily finite memory. We allow for general penalizing terms and we give conditions on the maximal depth of the estimated trees in order to get strongly consistent estimates. This generalize...




Publication date: 2007